
    Block Sensitivity of Minterm-Transitive Functions

    Boolean functions with symmetry properties are interesting from a complexity theory perspective; extensive research has shown that these functions, if nonconstant, must have high "complexity" according to various measures. In recent work of this type, Sun gave bounds on the block sensitivity of nonconstant Boolean functions invariant under a transitive permutation group. Sun showed that all such functions satisfy bs(f) = Ω(N^{1/3}), and that there exists such a function for which bs(f) = O(N^{3/7} ln N). His example function belongs to a subclass of transitively invariant functions called the minterm-transitive functions (defined in earlier work by Chakraborty). We extend these results in two ways. First, we show that nonconstant minterm-transitive functions satisfy bs(f) = Ω(N^{3/7}). Thus Sun's example function has nearly minimal block sensitivity for this subclass. Second, we give an improved example: a minterm-transitive function for which bs(f) = O(N^{3/7} ln^{1/7} N).
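    The abstract uses the standard notion of block sensitivity without restating it. As a reference point, here is a minimal brute-force Python sketch (our own illustration, not code from the paper) of that definition: bs(f, x) is the largest number of pairwise disjoint blocks B ⊆ [n] such that flipping x on each block changes f's value, and bs(f) is the maximum over all inputs x. This is feasible only for tiny n.

        from itertools import product

        def flip(x, block):
            """Return the input x (a 0/1 tuple) with the bits in `block` flipped."""
            return tuple(1 - b if i in block else b for i, b in enumerate(x))

        def subsets(n):
            for mask in range(1 << n):
                yield frozenset(i for i in range(n) if mask >> i & 1)

        def bs_at(f, x, n):
            """Largest family of pairwise disjoint sensitive blocks for f at x."""
            sensitive = [B for B in subsets(n) if B and f(flip(x, B)) != f(x)]
            best = 0
            def search(used, count, start):
                nonlocal best
                best = max(best, count)
                for j in range(start, len(sensitive)):
                    if sensitive[j].isdisjoint(used):
                        search(used | sensitive[j], count + 1, j + 1)
            search(frozenset(), 0, 0)
            return best

        def block_sensitivity(f, n):
            return max(bs_at(f, x, n) for x in product([0, 1], repeat=n))

        # OR on 3 bits: at x = (0,0,0) the singletons {0}, {1}, {2} are disjoint
        # sensitive blocks, so bs(OR_3) = 3.
        print(block_sensitivity(lambda x: int(any(x)), 3))  # -> 3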

    Multitask Efficiencies in the Decision Tree Model

    In Direct Sum problems [KRW], one tries to show that, for a given computational model, the complexity of computing a collection of finite functions on independent inputs is approximately the sum of their individual complexities. In this paper, by contrast, we study the diversity of ways in which the joint computational complexity can behave when all the functions are evaluated on a common input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a small polynomial separation between deterministic and unambiguous query complexity shown by SavickĂœ. We also pose a variant of the Direct Sum Conjecture of [KRW] which, if proved for a single family of functions, could yield an analogous result for models such as the communication model.
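    The complexity measure here is deterministic decision-tree depth D(f). For concreteness, here is a tiny exponential-time Python sketch (our illustration, not from the paper) of the textbook recursion: D(f) = 0 if f is constant, and otherwise 1 + min over variables i of the worse of the two restrictions f|_{x_i=0} and f|_{x_i=1}.

        from itertools import product
        from functools import lru_cache

        def dt_depth(f, n):
            """Deterministic decision-tree depth of f: {0,1}^n -> {0,1}."""
            @lru_cache(maxsize=None)
            def depth(fixed):
                assignment = dict(fixed)
                free = [i for i in range(n) if i not in assignment]
                # Is f constant once the queried bits are fixed?
                values = set()
                for bits in product([0, 1], repeat=len(free)):
                    x = [0] * n
                    for i, b in assignment.items():
                        x[i] = b
                    for i, b in zip(free, bits):
                        x[i] = b
                    values.add(f(tuple(x)))
                if len(values) <= 1:
                    return 0
                # Query the variable whose worst-case subtree is shallowest.
                return 1 + min(
                    max(depth(fixed + ((i, 0),)), depth(fixed + ((i, 1),)))
                    for i in free
                )
            return depth(())

        # Parity must query every bit: D(XOR_3) = 3.
        print(dt_depth(lambda x: sum(x) % 2, 3))  # -> 3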

    The Power of Unentanglement

    The class QMA(k), introduced by Kobayashi et al., consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k) = QMA(2) for k ≄ 2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. * We give a protocol by which a verifier can be convinced that a 3SAT formula of size m is satisfiable, with constant soundness, given Õ(√m) unentangled quantum witnesses with O(log m) qubits each. Our protocol relies on the existence of very short PCPs. * We show that, assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k) = QMA(2) for all k ≄ 2. * We prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
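    As a quick sanity check on the parameters (our arithmetic, not a claim made in the abstract beyond the stated bounds): Õ(√m) witnesses of O(log m) qubits each amount to

        \tilde{O}(\sqrt{m}) \cdot O(\log m) \;=\; \tilde{O}(\sqrt{m}) \text{ qubits in total,}

    since Õ(·) absorbs polylogarithmic factors. The total proof length is thus sublinear in the size m of the 3SAT formula.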

    Exponential Time Paradigms Through the Polynomial Time Lens

    We propose a general approach to modelling algorithmic paradigms for the exact solution of NP-hard problems. Our approach is based on polynomial time reductions to succinct versions of problems solvable in polynomial time. We use this viewpoint to explore and compare the power of paradigms such as branching and dynamic programming, and to shed light on the true complexity of various problems. As one instantiation, we model branching using the notion of witness compression, i.e., reducibility to the circuit satisfiability problem parameterized by the number of variables of the circuit. We show this is equivalent to the previously studied notion of 'OPP-algorithms', and provide a technique for proving conditional lower bounds for witness compression via a constructive variant of AND-composition, a notion previously studied in the theory of preprocessing. In the context of parameterized complexity we use this to show that problems such as Pathwidth, Treewidth, and Independent Set parameterized by pathwidth do not have witness compression unless NP ⊆ coNP/poly. Since these problems admit fast fixed-parameter tractable algorithms via dynamic programming, this shows that dynamic programming can be stronger than branching, under a standard complexity hypothesis. Our approach has applications outside parameterized complexity as well: for example, we show that if a polynomial-time algorithm outputs a maximum independent set of a given planar graph on n vertices with probability exp(-n^{1-Δ}) for some Δ > 0, then NP ⊆ coNP/poly. This negative result dims the prospects for one very natural approach to sub-exponential time algorithms for problems on planar graphs. As two further, more exploratory illustrations of our approach, we model algorithms based on inclusion-exclusion or group algebras via the notion of "parity compression", and we model a subclass of dynamic programming algorithms with the notion of "disjunctive dynamic programming". These models give us a way to naturally classify various parameterized problems with FPT algorithms. In the case of the dynamic programming model, we show that Independent Set parameterized by pathwidth is complete for this model.
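    To make the branching-as-witness-compression idea concrete, here is a toy Python illustration (our own example; the Vertex-Cover setting and all names are not from the paper). The standard 2-way branching algorithm for k-Vertex-Cover makes k binary choices, so the instance is a "yes"-instance iff the k-input "circuit" that simulates the branching accepts some assignment of the k branch bits.

        from itertools import product

        def vc_branch_circuit(edges, k):
            """Return the 'circuit': a predicate on k branch bits that simulates
            2-way branching for k-Vertex-Cover (pick either endpoint of an
            uncovered edge). The graph has a vertex cover of size <= k iff
            this k-variable circuit is satisfiable."""
            def accepts(choices):
                cover = set()
                for bit in choices[:k]:
                    uncovered = [(u, v) for (u, v) in edges
                                 if u not in cover and v not in cover]
                    if not uncovered:
                        return True
                    u, v = uncovered[0]
                    cover.add(u if bit == 0 else v)  # branch on the two endpoints
                return all(u in cover or v in cover for (u, v) in edges)
            return accepts

        # The path 0-1-2-3 has the size-2 cover {1, 2}.
        circuit = vc_branch_circuit([(0, 1), (1, 2), (2, 3)], k=2)
        print(any(circuit(c) for c in product([0, 1], repeat=2)))  # -> True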

    Limitations of Lower-Bound Methods for the Wire Complexity of Boolean Operators


    The Power of Many Samples in Query Complexity

    The randomized query complexity R(f) of a boolean function f: {0,1}^n → {0,1} is famously characterized (via Yao's minimax) by the least number of queries needed to distinguish a distribution D₀ over 0-inputs from a distribution D₁ over 1-inputs, maximized over all pairs (D₀, D₁). We ask: does this task become easier if we allow query access to infinitely many samples from either D₀ or D₁? We show the answer is no: there exists a hard pair (D₀, D₁) such that distinguishing D₀^∞ from D₁^∞ requires Θ(R(f)) many queries. As an application, we show that for any composed function f∘g we have R(f∘g) ≄ Ω(fbs(f)·R(g)), where fbs denotes fractional block sensitivity.
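    Fractional block sensitivity fbs(f, x) relaxes block sensitivity: instead of a maximum-size family of disjoint sensitive blocks, one solves the LP that puts weights in [0, 1] on all sensitive blocks, subject to each coordinate carrying total weight at most 1. A brute-force Python sketch (our illustration, assuming SciPy is available; feasible only for tiny n):

        from scipy.optimize import linprog

        def fbs_at(f, x, n):
            """fbs(f, x) via the standard LP over all sensitive blocks at x."""
            blocks = []
            for mask in range(1, 1 << n):
                B = {i for i in range(n) if mask >> i & 1}
                y = tuple(1 - x[i] if i in B else x[i] for i in range(n))
                if f(y) != f(x):
                    blocks.append(B)
            if not blocks:
                return 0.0
            c = [-1.0] * len(blocks)  # maximize total weight => minimize its negation
            A_ub = [[1.0 if i in B else 0.0 for B in blocks] for i in range(n)]
            b_ub = [1.0] * n          # each coordinate carries weight <= 1
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(blocks))
            return -res.fun

        # OR on 3 bits at (0,0,0): weight 1 on each singleton block is optimal.
        print(fbs_at(lambda x: int(any(x)), (0, 0, 0), 3))  # -> 3.0 (up to solver tolerance)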

    The complexity of joint computation

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 253-266).

    Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation, in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas:

    Improved direct product theorems for randomized query complexity: The "direct product problem" seeks to understand how the difficulty of computing a function on each of k independent inputs scales with k. We prove the following direct product theorem (DPT) for query complexity: if every T-query algorithm has success probability at most 1 − Δ in computing the Boolean function f on input distribution ÎŒ, then for sufficiently small α > 0, the worst-case success probability of any αR₂(f)k-query randomized algorithm for f^{⊗k} falls exponentially with k. The best previous statement of this type, due to Klauck, Ć palek, and de Wolf, required a query bound of O(bs(f)k). Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve f^{⊗k}. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.

    Joint complexity in the Decision Tree Model: We study the diversity of possible behaviors of the joint computational complexity of a collection f₁, ..., f_k of Boolean functions over a shared input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model, we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a polynomial separation between deterministic and unambiguous query complexity shown by SavickĂœ. We also pose a conjecture in the communication model which, if proved, would extend our result to that model.

    Limitations of Lower-Bound Methods for the Wire Complexity of Boolean Operators: We study the circuit complexity of Boolean operators, i.e., collections of Boolean functions defined over a common input. Our focus is the well-studied model in which arbitrary Boolean functions are allowed as gates, and in which a circuit's complexity is measured by its depth and number of wires. We show sharp limitations of several existing lower-bound methods for this model.
    First, we study an information-theoretic lower-bound method due to Cherukhin, which gave the first improvement over the lower bounds provided by the well-known superconcentrator technique for constant depths. (The lower bounds are still barely superlinear, however.) Cherukhin's method was formalized by Jukna as a general lower-bound criterion for Boolean operators, the "Strong Multiscale Entropy" (SME) property. It seemed plausible that this property could imply significantly better lower bounds by an improved analysis. However, we show that this is not the case, by exhibiting an explicit operator with the SME property that is computable in constant depth and whose wire complexity essentially matches the Cherukhin-Jukna lower bound (to within a constant multiplicative factor, for depths d = 2, 3 and for even depths d ≄ 6). Next, we show limitations of two simpler lower-bound criteria given by Jukna: the "entropy method" for general operators, and the "pairwise-distance method" for linear operators. We show that neither method gives superlinear lower bounds for depth 3. In the process, we obtain the first known polynomial separation between the depth-2 and depth-3 wire complexities for an explicit operator. We also continue the study (initiated by Jukna) of the complexity of "representing" a linear operator by bounded-depth circuits, a weaker notion than computing the operator.

    New limits to classical and quantum instance compression: Given an instance of a decision problem that is too difficult to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent instance of the same or a different problem. As a representative problem, say we are given Boolean formulas ψ₁, ..., ψ_t, each of length n << t, and we want to determine if at least one ψ_j is satisfiable. Can we efficiently reduce this "OR-SAT" question to an equivalent problem instance (of SAT or another problem) of size poly(n), independent of t? We call any such reduction a "strong compression" reduction for OR-SAT. This would amount to a major gain from compressing ψ₁, ..., ψ_t jointly, since we know of no way to reliably compress an individual SAT instance. Harnik and Naor (FOCS '06 / SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08 / JCSS '09) showed that the infeasibility of strong compression for OR-SAT would also imply limits to instance compression schemes for a large number of other natural problems; this is significant because instance compression is a central technique in the design of so-called fixed-parameter tractable algorithms. Bodlaender et al. also showed that the infeasibility of strong compression for the analogous "AND-SAT" problem would establish limits to instance compression for another family of problems. Fortnow and Santhanam (STOC '08) showed that deterministic (or 1-sided error randomized) strong compression for OR-SAT is not possible unless NP ⊆ coNP/poly; the case of AND-SAT remained mysterious. We give new and improved evidence against strong compression schemes for both OR-SAT and AND-SAT; our method applies to probabilistic compression schemes with 2-sided error. We also give versions of these results for an analogous task of quantum instance compression, in which a polynomial-time quantum reduction must output a quantum state that, in an appropriate sense, "preserves the answer" to the input instance.
    We give quantitatively similar evidence against strong compression for AND- and OR-SAT in this setting, albeit under less well-studied hypotheses about the relationship between NP and quantum complexity classes. To prove all of these results, we exploit the information bottleneck of an instance compression scheme, using a new method to "disguise" information being fed into a compressive mapping.

    by Andrew Donald Drucker.
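    For readers outside parameterized complexity, the "strong compression" target can be phrased as follows (our paraphrase of the setup described above, not the thesis's formal definition): a polynomial-time reduction R taking formulas ψ₁, ..., ψ_t, each of length at most n, to a single instance z of some language L' with

        |z| \le \mathrm{poly}(n) \quad (\text{independent of } t), \qquad
        z \in L' \iff \exists j:\ \psi_j \in \mathrm{SAT}.

    The results above say that even probabilistic, 2-sided-error reductions of this kind are infeasible under the stated complexity assumptions.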
